On Resource Pooling and Separation for LRU Caching
Caching systems using the Least Recently Used (LRU) principle have now become
ubiquitous. A fundamental question for these systems is whether the cache space
should be pooled together or divided to serve multiple flows of data item
requests in order to minimize the miss probabilities. In this paper, we show
that there is no straightforward yes-or-no answer to this question: the answer
depends on complex combinations of critical factors, including, e.g., request
rates, the overlap of data items across different request flows, data item
popularities, and data item sizes. Specifically, we characterize the asymptotic miss
probabilities for multiple competing request flows under resource pooling and
separation for LRU caching when the cache size is large.
Analytically, we show that it is asymptotically optimal to jointly serve
multiple flows if their data item sizes and popularity distributions are
similar and their arrival rates do not differ significantly; the
self-organizing property of LRU caching automatically optimizes the resource
allocation among them asymptotically. Otherwise, separating these flows could
be better, e.g., when data sizes vary significantly. We also quantify critical
points beyond which resource pooling is better than separation for each of the
flows when the overlapped data items exceed certain levels. Technically, we
generalize existing results on the asymptotic miss probability of LRU caching
for a broad class of heavy-tailed distributions and extend them to multiple
competing flows with varying data item sizes, which also validates the Che
approximation under certain conditions. These results provide new insights into
improving the performance of caching systems.
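As a rough illustration of the pooling-versus-separation question studied above, the following sketch simulates two request flows with Zipf-like popularities against one pooled LRU cache and, alternatively, against two separate LRU caches of half the size. All names and parameters (catalog sizes, exponents, capacities, request counts) are illustrative assumptions, not values from the paper.

```python
# Hedged simulation sketch: pooled vs. separated LRU caches for two flows.
import random
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.requests = 0

    def access(self, key):
        self.requests += 1
        if key in self.store:
            self.store.move_to_end(key)          # mark as most recently used
            self.hits += 1
        else:
            self.store[key] = True
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)   # evict least recently used

    def miss_ratio(self):
        return 1.0 - self.hits / self.requests

rng = random.Random(0)
catalog = 1000
weights_a = [1 / (i + 1) ** 0.8 for i in range(catalog)]   # flow A popularity (Zipf-like)
weights_b = [1 / (i + 1) ** 1.2 for i in range(catalog)]   # flow B popularity (Zipf-like)

pooled = LRUCache(200)
sep_a, sep_b = LRUCache(100), LRUCache(100)
for _ in range(50_000):
    if rng.random() < 0.5:   # equal arrival rates for the two flows
        key = ("A", rng.choices(range(catalog), weights=weights_a)[0])
    else:
        key = ("B", rng.choices(range(catalog), weights=weights_b)[0])
    pooled.access(key)
    (sep_a if key[0] == "A" else sep_b).access(key)

print("pooled miss ratio    :", round(pooled.miss_ratio(), 3))
print("separated miss ratios:", round(sep_a.miss_ratio(), 3), round(sep_b.miss_ratio(), 3))
```

Making the two popularity exponents and arrival rates more similar, or more different, gives a feel for when a shared cache's self-organizing behavior helps and when separation is preferable, in the spirit of the asymptotic results above.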
Asymptotic Miss Ratio of LRU Caching with Consistent Hashing
To efficiently scale data caching infrastructure to support emerging big data
applications, many caching systems rely on consistent hashing to group a large
number of servers to form a cooperative cluster. These servers are organized
together according to a random hash function. They jointly provide a unified
but distributed hash table to serve swift and voluminous data item requests.
Different from a single least-recently-used (LRU) server, which has already
been extensively studied, a cluster consisting of multiple LRU servers has yet
to be theoretically characterized. These servers are not simply added together;
the random hashing complicates their behavior. To this
end, we derive the asymptotic miss ratio of data item requests on an LRU cluster
with consistent hashing. We show that the individual cache spaces on
different servers can be effectively viewed as if they could be pooled together
to form a single virtual LRU cache space parametrized by an appropriate cache
size. This equivalence can be established rigorously under the condition that
the cache sizes of the individual servers are large, a condition that is common
in typical data caching systems. Our theoretical framework provides a convenient
abstraction through which results for the simpler single LRU cache can be applied
directly to the more complex LRU cluster with consistent hashing.
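To make the pooling equivalence concrete, here is a small hedged sketch in which keys are hashed to one of several LRU servers and the cluster's aggregate miss ratio is compared with a single LRU cache of the combined capacity. The plain hash-based placement is a simplified stand-in for a full consistent-hashing ring (virtual nodes and ring lookups are omitted), and the server count, capacities, and popularity law are assumptions for illustration.

```python
# Hedged sketch: LRU cluster under hash-based placement vs. one pooled LRU cache.
import hashlib
import random
from collections import OrderedDict

def lru_access(cache, capacity, key):
    """Access `key` in OrderedDict `cache` used as an LRU; return True on a hit."""
    if key in cache:
        cache.move_to_end(key)        # most recently used
        return True
    cache[key] = True
    if len(cache) > capacity:
        cache.popitem(last=False)     # evict least recently used
    return False

def server_of(key, n_servers):
    # simple hash-based placement standing in for a consistent-hashing ring
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % n_servers

N_SERVERS, PER_SERVER = 8, 50
cluster = [OrderedDict() for _ in range(N_SERVERS)]
single = OrderedDict()
rng = random.Random(1)
items = range(2000)
weights = [1 / (i + 1) ** 1.0 for i in items]   # Zipf-like popularity (assumed)

hits_cluster = hits_single = n = 0
for _ in range(50_000):
    key = rng.choices(items, weights=weights)[0]
    hits_cluster += lru_access(cluster[server_of(key, N_SERVERS)], PER_SERVER, key)
    hits_single += lru_access(single, N_SERVERS * PER_SERVER, key)
    n += 1

print("cluster miss ratio:", round(1 - hits_cluster / n, 3))
print("single  miss ratio:", round(1 - hits_single / n, 3))
```

With large per-server capacities, the two miss ratios should be close, which is the intuition behind viewing the cluster as one virtual LRU cache of an appropriate size.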
Direction-oriented Multi-objective Learning: Simple and Provable Stochastic Algorithms
Multi-objective optimization (MOO) has become an influential framework in
many machine learning problems with multiple objectives such as learning with
multiple criteria and multi-task learning (MTL). In this paper, we propose a
new direction-oriented multi-objective problem by regularizing the common
descent direction within a neighborhood of a direction that optimizes a linear
combination of objectives such as the average loss in MTL. This formulation
includes GD and MGDA as special cases, enjoys the direction-oriented benefit as
in CAGrad, and facilitates the design of stochastic algorithms. To solve this
problem, we propose Stochastic Direction-oriented Multi-objective Gradient
descent (SDMGrad) with simple SGD type of updates, and its variant SDMGrad-OS
with an efficient objective sampling in the setting where the number of
objectives is large. For a constant-level regularization parameter, we show that
SDMGrad and SDMGrad-OS provably converge to a Pareto stationary point with
improved complexities under milder assumptions. For an increasing regularization
parameter, this convergence point reduces to a stationary point of the linear
combination of objectives. We demonstrate the superior performance of the
proposed methods in a series of tasks on multi-task supervised learning and
reinforcement learning. Code is provided at
https://github.com/ml-opt-lab/sdmgrad
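The following sketch illustrates one plausible reading of a direction-oriented step: simplex weights over per-objective gradients are chosen to keep the combined direction small while rewarding alignment with a target direction g0 (here, the average-loss gradient), and the weighted direction is then combined with that target. The inner objective, the final combination with `lam`, and all names are assumptions made for illustration; the authors' exact SDMGrad/SDMGrad-OS updates are in the repository linked above.

```python
# Hedged sketch of a direction-oriented multi-objective update (not the exact SDMGrad).
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def direction_oriented_step(G, lam=0.1, inner_steps=50, lr=0.1):
    """G: (K, d) array of per-objective gradients. Returns an update direction."""
    K = G.shape[0]
    g0 = G.mean(axis=0)                  # target direction: average-loss gradient
    w = np.full(K, 1.0 / K)              # simplex weights, initialized uniformly
    for _ in range(inner_steps):
        d = G.T @ w                      # current combined gradient
        # gradient of 0.5*||Gw||^2 - lam*<Gw, g0> with respect to w (assumed objective)
        grad_w = G @ d - lam * (G @ g0)
        w = project_to_simplex(w - lr * grad_w)
    return G.T @ w + lam * g0            # assumed form of the combined direction

rng = np.random.default_rng(0)
G = rng.normal(size=(4, 10))             # toy data: 4 objectives, 10 parameters
d = direction_oriented_step(G)
print("update direction norm:", np.linalg.norm(d))
```

With lam = 0 the inner problem recovers an MGDA-style min-norm weighting, while a large lam pushes the direction toward the average-loss gradient, matching the abstract's description of interpolating between the two regimes.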